
docs: High Availability Setup documentation #2715


Open · wants to merge 4 commits into main

Conversation

@Rishab87 Rishab87 commented Aug 3, 2025

What this PR does / why we need it:
Adds documentation for high availability setups

How does this change affect the cardinality of KSM: (increases, decreases or does not change cardinality)
does not change cardinality

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #2081

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Aug 3, 2025
@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Aug 3, 2025
@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Aug 4, 2025
@dgrisonnet (Member)
/assign
/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 7, 2025
README.md Outdated
@@ -304,6 +305,12 @@ spec:

Other metrics can be sharded via [Horizontal sharding](#horizontal-sharding).

### High Availability

For high availability, run multiple kube-state-metrics replicas with anti-affinity rules to prevent single points of failure. Configure 2 replicas, anti-affinity rules on hostname, rolling update strategy with `maxUnavailable: 1`, and a PodDisruptionBudget with `minAvailable: 1`.
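As a rough sketch of what that paragraph describes (the namespace, labels, and image tag below are illustrative assumptions, not taken from this PR), the pieces fit together like this:

```yaml
# Illustrative sketch only: a 2-replica kube-state-metrics Deployment with
# hostname anti-affinity, a rolling-update budget, and a matching PDB.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # keep one replica serving during rollouts
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kube-state-metrics
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname   # spread replicas across nodes
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: kube-state-metrics
      containers:
        - name: kube-state-metrics
          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0  # tag is illustrative
---
# PDB so voluntary disruptions (e.g. node drains) never take both replicas down.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
```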
Member
What is the value-add specifically for ksm here? I believe this would apply to most Kubernetes deployments if you want to run them highly available.

Contributor Author

I'll update the documentation to explain that users must perform query-time deduplication

README.md Outdated

For high availability, run multiple kube-state-metrics replicas with anti-affinity rules to prevent single points of failure. Configure 2 replicas, anti-affinity rules on hostname, rolling update strategy with `maxUnavailable: 1`, and a PodDisruptionBudget with `minAvailable: 1`.

When using multiple replicas, Prometheus will scrape all instances, resulting in duplicate metrics that differ only in their instance labels. Handle deduplication in queries, for example with `avg without(instance) (metric_name)`. Brief inconsistencies may occur during state transitions but resolve quickly as the replicas sync with the API server.
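To make that query-time deduplication concrete, one option is a Prometheus recording rule that strips the instance label up front. The rule and metric names below are examples, not prescribed by this PR; note also that a later revision in this thread prefers `max` over `avg`, since averaging briefly-diverging replicas can yield misleading non-integer values:

```yaml
# Example Prometheus rule file (assumed names): collapse the duplicate series
# produced by two kube-state-metrics replicas by aggregating away `instance`.
groups:
  - name: kube-state-metrics-dedup
    rules:
      - record: cluster:kube_deployment_status_replicas:max
        # `max` rather than `avg`: both replicas should report the same gauge
        # value, and `max` stays an integer if they briefly disagree.
        expr: max without (instance) (kube_deployment_status_replicas)
```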
Member

This depends on how you scrape the metrics with prometheus, on a pod level or a service level.

Contributor Author

yeah right, I'll update the section to clarify the differences.

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Rishab87
Once this PR has been reviewed and has the lgtm label, please ask for approval from dgrisonnet. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Rishab87 commented Aug 8, 2025

@mrueg I've addressed the reviews, can you have a look again?

README.md Outdated
@@ -304,6 +305,12 @@ spec:

Other metrics can be sharded via [Horizontal sharding](#horizontal-sharding).

### High Availability

For high availability, run multiple kube-state-metrics replicas to prevent a single point of failure. A standard setup uses at least 2 replicas, pod anti-affinity rules to ensure they run on different nodes, and a PodDisruptionBudget (PDB) with `minAvailable: 1` to protect against voluntary disruptions.
Member

I would suggest adding an introductory paragraph:

  • It should mention that multiple replicas increase the load on the Kubernetes API server as a trade-off.

  • It should mention that you most likely don't need HA if you scrape every 30s and can tolerate a few missed scrapes (which is usually the case).

  • Does a "standard" setup exist?

README.md Outdated

For high availability, run multiple kube-state-metrics replicas to prevent a single point of failure. A standard setup uses at least 2 replicas, pod anti-affinity rules to ensure they run on different nodes, and a PodDisruptionBudget (PDB) with `minAvailable: 1` to protect against voluntary disruptions.

When scraping the individual pods directly in an HA setup, Prometheus will ingest duplicate metrics distinguished only by the instance label. This requires you to deduplicate the data in your queries, for example, by using `max without(instance) (your_metric)`. The correct aggregation function (max, sum, avg, etc.) is important and depends on the metric type, as using the wrong one can produce incorrect values for timestamps or during brief state transitions.
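To pin down what "scraping the individual pods directly" means, here is a sketch of a plain Prometheus scrape job using pod discovery (the namespace, pod label, and port name are assumptions based on common kube-state-metrics manifests, not part of this PR):

```yaml
# Hypothetical scrape_config using pod-level discovery: every
# kube-state-metrics replica becomes its own target, so the same series is
# ingested once per replica, distinguished only by the `instance` label.
scrape_configs:
  - job_name: kube-state-metrics
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [kube-system]
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
        regex: kube-state-metrics
        action: keep
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        regex: http-metrics          # metrics port name in the upstream manifests
        action: keep
```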
Member

Is Pod Scraping that common? I would assume most folks will scrape at the service level via a ServiceMonitor / Prometheus-Operator or similar.

Contributor Author

yeah right
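For the service-level path the reviewer mentions, the Prometheus Operator equivalent would be a ServiceMonitor along these lines (names and labels are placeholders). Worth noting: a ServiceMonitor targets the Service's Endpoints, so each replica is still scraped as a separate target and the query-time deduplication above still applies; only scraping through the Service's single virtual IP would hit one replica per scrape.

```yaml
# Sketch of a Prometheus Operator ServiceMonitor (assumed names/labels).
# It discovers the Service's Endpoints, so each kube-state-metrics replica
# is still scraped individually.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  endpoints:
    - port: http-metrics   # must match the Service port name
      interval: 30s
```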

@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Aug 15, 2025
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Aug 15, 2025
@Rishab87 (Contributor Author)
@mrueg I've made the changes, can you please re-review?

Labels
cncf-cla: yes · Indicates the PR's author has signed the CNCF CLA.
size/M · Denotes a PR that changes 30-99 lines, ignoring generated files.
triage/accepted · Indicates an issue or PR is ready to be actively worked on.
Development

Successfully merging this pull request may close these issues.

High Availability setup documentation request
4 participants